Computing Maximum Entropy Distributions Everywhere

Authors

  • Damian Straszak
  • Nisheeth K. Vishnoi
Abstract

We study the problem of computing the maximum entropy distribution with a specified expectation over a large discrete domain. Maximum entropy distributions arise and have found numerous applications in economics, machine learning and various sub-disciplines of mathematics and computer science. The key computational questions related to maximum entropy distributions are whether they have succinct descriptions (polynomial-size in the input) and whether they can be efficiently computed. Here we provide positive answers to both of these questions for very general domains and, importantly, with no restriction on the expectation vector. This completes the picture left open by the prior work on this problem by [38], which requires that the given expectation vector be polynomially far in the interior of the convex hull of the domain. As a consequence of our result, we obtain a general algorithmic tool and show how it can be applied to derive several old and new results in a unified manner. In particular, our results imply that certain recent continuous optimization formulations, for instance, for discrete counting and optimization problems, the matrix scaling problem, and the worst-case Brascamp-Lieb constants in the rank-1 regime, are efficiently computable. Attaining these implications requires reformulating the underlying problem as a version of maximum entropy computation in which the optimization also involves the expectation vector, which hence cannot be assumed to be sufficiently deep in the interior. The key new technical ingredient in our work is a polynomial bound on the bit complexity of near-optimal dual solutions to the maximum entropy convex program. This result is obtained by a geometric argument that involves convex analysis and polyhedral geometry, avoiding combinatorial arguments based on the specific structure of the domain. We also provide a lower bound on the bit complexity of near-optimal solutions, showing the tightness of our results.
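To make the convex program behind these statements concrete, here is a minimal Python sketch of the dual formulation on a toy domain small enough to enumerate explicitly. The domain D and target expectation theta below are illustrative assumptions (the paper's setting allows implicitly specified, exponentially large domains, which is where the succinctness and bit-complexity questions become nontrivial):

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Hypothetical toy instance: domain D = vertices of the unit square,
# target expectation theta in the interior of conv(D).
D = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = np.array([0.3, 0.6])

def dual(y):
    # Convex dual of the max-entropy program:
    #   log sum_{x in D} exp(<y, x>) - <y, theta>.
    return logsumexp(D @ y) - y @ theta

def dual_grad(y):
    # Gradient is E_{p_y}[x] - theta, where p_y(x) is proportional to exp(<y, x>).
    p = np.exp(D @ y - logsumexp(D @ y))
    return p @ D - theta

res = minimize(dual, np.zeros(2), jac=dual_grad, method="BFGS")
p_star = np.exp(D @ res.x - logsumexp(D @ res.x))  # max-entropy distribution
print("p* =", p_star, " E[x] =", p_star @ D)  # E[x] should match theta

As theta approaches the boundary of conv(D), the optimal dual multipliers y* diverge; bounding the bit complexity of near-optimal duals, uniformly over theta, is exactly the technical ingredient described above.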


Similar Articles

Determination of Maximum Bayesian Entropy Probability Distribution

In this paper, we consider methods for determining maximum entropy multivariate distributions with a given prior, under the constraints that the marginal distributions, or the marginals and the covariance matrix, are prescribed. Next, some numerical solutions are considered for cases where closed-form solutions are unavailable. Finally, these methods are illustrated via some numerical examples.


A Note on the Bivariate Maximum Entropy Modeling

Let X = (X1, X2) be a continuous random vector. Under the assumption that the marginal distributions of X1 and X2 are given, we develop models for the vector X when there is partial information about the dependence structure between X1 and X2. The models, which are obtained based on the well-known Principle of Maximum Entropy, are called the maximum entropy (ME) mo...


Tsallis Maximum Entropy Lorenz Curves

In this paper, we first derive a family of maximum Tsallis entropy distributions under optional side conditions on the mean income and the Gini index. Furthermore, corresponding to these distributions, a family of Lorenz curves compatible with the optional side conditions is generated. Meanwhile, we show that our results reduce to Shannon entropy as $\beta$ tends to one. Finally, by using ac...
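For reference, the Tsallis entropy of index $\beta$ (the standard definition, stated here as background since the snippet above is truncated) and its Shannon limit are

$$ S_\beta(p) = \frac{1 - \sum_i p_i^{\beta}}{\beta - 1}, \qquad \lim_{\beta \to 1} S_\beta(p) = -\sum_i p_i \ln p_i, $$

which follows from the expansion $p_i^{\beta} = p_i e^{(\beta-1)\ln p_i} = p_i\left(1 + (\beta-1)\ln p_i + O((\beta-1)^2)\right)$.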


Why Skew Normal: A Simple Pedagogical Explanation

In many practical situations, we only know the first few moments of a random variable, and out of all probability distributions consistent with this information, we need to select one. When we know the first two moments, we can use the Maximum Entropy approach and get the normal distribution. However, when we know the first three moments, the Maximum Entropy approach does not work. In such ...
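The obstruction alluded to here is a standard one (sketched as background, not taken from this paper): the maximum entropy density under constraints on the first three moments has the exponential-polynomial form

$$ f(x) \propto \exp\!\left(\lambda_1 x + \lambda_2 x^2 + \lambda_3 x^3\right), $$

and whenever $\lambda_3 \neq 0$ the cubic term drives the exponent to $+\infty$ as $x \to +\infty$ or $x \to -\infty$, so $f$ is not normalizable on $\mathbb{R}$ and no maximizer exists for a generic third-moment constraint.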


Constrained Signals: A General Theory of Information Content and Detection

In this paper, a general theory of signals characterized by probabilistic constraints is developed. As in previous work [10], the theoretical development employs Lagrange multipliers to implement the constraints and the maximum entropy principle to generate the most likely probability distribution function consistent with the constraints. The method of computing the probability distri...
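The construction referenced here is the standard maximum entropy form (a sketch under the usual assumptions; the constraint functions $f_k$ and targets $c_k$ are placeholders, and the actual constraints in [10] and this paper may differ): maximizing entropy subject to $\mathbb{E}[f_k(X)] = c_k$ via Lagrange multipliers yields

$$ p(x) = \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big), \qquad Z(\lambda) = \int \exp\!\Big(-\sum_k \lambda_k f_k(x)\Big)\, dx, $$

with the multipliers $\lambda_k$ chosen so that each constraint is satisfied.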



Journal:
  • CoRR

Volume: abs/1711.02036

Publication date: 2017